Tired of checking iotop and seeing your DRBD partition using 99.99% of I/O all the time, and finding that your DRBD device is generally slow?
This is especially an issue with DRBD versions in the 8.3 tree; one documented case is on 8.3.13, but it likely applies to other versions as well.
The symptoms: resyncing works fine and looks normal, but any reasonable amount of activity is very slow and lagged, creating high server load and consequently high I/O wait. You may not notice it until you put a reasonable load on the server and the DRBD device.
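If you want to confirm what you are seeing, the usual tools are enough. A quick check might look like the following (a rough sketch; drbd0 and the exact output will vary by system):

# Show only processes actually doing I/O; the drbd worker/receiver threads
# pinned at or near 100% I/O is the telltale sign
iotop -o

# Confirm the resource is Connected/UpToDate and not stuck resyncing
cat /proc/drbd

# Watch extended disk stats and %iowait every 5 seconds
iostat -x 5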
With some kernels and some versions you may get the following error in dmesg:
block drbd0: [drbd0_worker/1670] sock_sendmsg time expired, ko = 4294967295
However, we have seen many cases where the above error is not present (perhaps an older kernel or module does not recognize that it is being blocked).
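To check whether your kernel is logging this, a simple grep of the kernel log is enough (the exact message wording may differ between kernel and DRBD module versions):

# Look for DRBD send timeouts in the kernel log
dmesg | grep -i "sock_sendmsg time expired"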
Here are some quick sysctl.conf kernel tuning tips that have taken a server's load from 10-18 down to less than 1.
Add or edit these settings in /etc/sysctl.conf to solve the issue:
net.ipv4.tcp_rmem = 131072 131072 10485760
net.ipv4.tcp_wmem = 131072 131072 10485760
vm.dirty_ratio = 10
vm.dirty_background_ratio = 4
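To apply the new values without rebooting, reload sysctl. This assumes you edited /etc/sysctl.conf; on systems that use /etc/sysctl.d/ you may prefer a drop-in file there instead.

# Reload /etc/sysctl.conf
sysctl -p

# Or set the values on the fly first, to test before making them permanent
sysctl -w vm.dirty_ratio=10
sysctl -w vm.dirty_background_ratio=4
sysctl -w net.ipv4.tcp_rmem="131072 131072 10485760"
sysctl -w net.ipv4.tcp_wmem="131072 131072 10485760"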
This approach is preferable to hacking or changing DRBD settings, or to upgrading the userland or kernel manually or while it is running (results can be unpredictable); this kernel tuning has virtually no risk or impact on DRBD except a positive one.